Stochastic optimization (SO) methods are optimization methods that generate and use random variables. For stochastic problems, the random variables appear in the formulation of the optimization problem itself, for example as random objective functions or random constraints. Stochastic optimization methods also include methods with random iterates. Some stochastic optimization methods use random iterates to solve stochastic problems, combining both meanings of stochastic optimization. Stochastic optimization methods generalize deterministic methods for deterministic problems.

==Methods for stochastic functions==
Partly random input data arise in such areas as real-time estimation and control, simulation-based optimization where Monte Carlo simulations are run as estimates of an actual system,〔M.C. Campi and S. Garatti. ''The Exact Feasibility of Randomized Solutions of Uncertain Convex Programs.'' SIAM J. on Optimization, 19, no. 3: 1211–1230, 2008.〕 and problems where there is experimental (random) error in the measurements of the criterion. In such cases, knowledge that the function values are contaminated by random "noise" leads naturally to algorithms that use statistical inference tools to estimate the "true" values of the function and/or make statistically optimal decisions about the next steps. Methods of this class include:
* stochastic approximation (SA), by Robbins and Monro (1951)
** stochastic gradient descent
** finite-difference SA by Kiefer and Wolfowitz (1952)
** simultaneous perturbation SA by Spall (1992)
* scenario optimization
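As an illustration of the class of methods above, here is a minimal sketch of simultaneous perturbation SA (SPSA): at each step the full gradient is estimated from only two noisy loss evaluations, regardless of dimension. The gain constants, the Rademacher perturbation, and the toy noisy quadratic below are common textbook choices, not prescriptions from this article.

```python
import numpy as np

def spsa_minimize(loss, theta0, n_iter=200, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101, seed=0):
    """Minimize a noisy loss with simultaneous perturbation SA.

    Each iteration estimates the gradient from two noisy loss
    evaluations, independent of the dimension of theta.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k**alpha                 # decaying step size
        ck = c / k**gamma                 # decaying perturbation size
        # Random +/-1 (Rademacher) perturbation direction.
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        # Two-sided gradient estimate: g_i = (y+ - y-) / (2 c_k delta_i).
        g_hat = (loss(theta + ck * delta)
                 - loss(theta - ck * delta)) / (2 * ck * delta)
        theta = theta - ak * g_hat
    return theta

# Toy problem: quadratic with true minimum at (1, -2); each
# measurement of the criterion carries additive Gaussian noise.
target = np.array([1.0, -2.0])
noise_rng = np.random.default_rng(1)
def noisy_loss(th):
    return float(np.sum((th - target) ** 2) + 0.01 * noise_rng.normal())

theta_hat = spsa_minimize(noisy_loss, theta0=[0.0, 0.0])
```

Because only noisy function values (no exact gradients) are available, the decaying gains `a_k` and `c_k` are what drive the iterates toward the optimum despite the measurement noise.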